Markov property





Colored Markov Random Fields for Probabilistic Topological Modeling

Marinucci, Lorenzo, Di Nino, Leonardo, D'Acunto, Gabriele, Pandolfo, Mario Edoardo, Di Lorenzo, Paolo, Barbarossa, Sergio

arXiv.org Machine Learning

Probabilistic Graphical Models (PGMs) encode conditional dependencies among random variables using a graph (nodes for variables, links for dependencies) and factorize the joint distribution into lower-dimensional components. This makes PGMs well suited for analyzing complex systems and supporting decision-making. Recent advances in topological signal processing highlight the importance of variables defined on topological spaces in several application domains. In such cases, the underlying topology shapes statistical relationships, limiting the expressiveness of canonical PGMs. To overcome this limitation, we introduce Colored Markov Random Fields (CMRFs), which model both conditional and marginal dependencies among Gaussian edge variables on topological spaces, with a theoretical foundation in Hodge theory. CMRFs extend classical Gaussian Markov Random Fields by including link coloring: connectivity encodes conditional independence, while color encodes marginal independence. We quantify the benefits of CMRFs through a distributed estimation case study over a physical network, comparing them with baselines that incorporate different levels of topological prior knowledge.
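The distinction the abstract draws between conditional and marginal independence is a standard fact about Gaussian Markov Random Fields: a zero in the precision matrix encodes conditional independence, while a zero in the covariance matrix encodes marginal independence. A minimal numeric sketch (an illustrative 3-variable GMRF, not the paper's CMRF construction; link coloring and Hodge theory are not shown):

```python
import numpy as np

# Hypothetical 3-variable Gaussian Markov Random Field.
# In a GMRF, a zero entry in the precision matrix Theta encodes
# CONDITIONAL independence, while a zero in the covariance
# Sigma = Theta^{-1} encodes MARGINAL independence.
Theta = np.array([
    [2.0, 0.6, 0.0],   # Theta[0, 2] == 0: x0 _||_ x2 given x1
    [0.6, 2.0, 0.6],
    [0.0, 0.6, 2.0],
])

Sigma = np.linalg.inv(Theta)

# Conditional independence: zero in the precision matrix ...
print(np.isclose(Theta[0, 2], 0.0))   # True
# ... does NOT imply marginal independence: Sigma[0, 2] != 0.
print(np.isclose(Sigma[0, 2], 0.0))   # False
```

CMRFs, as described above, augment this picture so that a second structure (the link coloring) captures the marginal-independence pattern alongside the usual graph connectivity.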




Characterization and Learning of Causal Graphs with Latent Confounders and Post-treatment Selection from Interventional Data

Luo, Gongxu, Li, Loka, Chen, Guangyi, Dai, Haoyue, Zhang, Kun

arXiv.org Artificial Intelligence

Interventional causal discovery seeks to identify causal relations by leveraging distributional changes introduced by interventions, even in the presence of latent confounders. Beyond the spurious dependencies induced by latent confounders, we highlight a common yet often overlooked challenge in this problem: post-treatment selection, in which samples are selectively included in datasets after interventions. This fundamental challenge widely exists in biological studies; for example, in gene expression analysis, both observational and interventional samples are retained only if they meet quality control criteria (e.g., highly active cells). Neglecting post-treatment selection may introduce spurious dependencies and distributional changes under interventions, which can mimic causal responses, thereby distorting causal discovery results and challenging existing causal formulations. To address this, we introduce a novel causal formulation that explicitly models post-treatment selection and reveals how its differential reactions to interventions can distinguish causal relations from selection patterns, allowing us to go beyond traditional equivalence classes toward the underlying true causal structure. We then characterize its Markov properties and propose a Fine-grained Interventional equivalence class, named FI-Markov equivalence, represented by a new graphical diagram, F-PAG. Finally, we develop a provably sound and complete algorithm, F-FCI, to identify causal relations, latent confounders, and post-treatment selection up to FI-Markov equivalence, using both observational and interventional data. Experimental results on synthetic and real-world datasets demonstrate that our method recovers causal relations despite the presence of both selection and latent confounders.
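The spurious dependence that selection induces is the classic collider (selection bias) effect: conditioning on a variable that depends on two independent causes makes those causes dependent. A minimal simulation of this mechanism (an illustrative toy, not the paper's F-FCI procedure; the "quality control" threshold is made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two independent variables (no causal link between them).
x = rng.normal(size=n)
y = rng.normal(size=n)

# Post-treatment selection: retain only samples passing a quality
# control criterion that depends on both x and y (analogous to
# keeping only highly active cells).
selected = (x + y) > 1.0

corr_full = np.corrcoef(x, y)[0, 1]
corr_sel = np.corrcoef(x[selected], y[selected])[0, 1]

print(round(corr_full, 3))  # ~0: independent before selection
print(round(corr_sel, 3))   # clearly negative: spurious dependence
```

The selected subsample shows a strong negative correlation even though x and y are marginally independent, which is exactly the kind of artifact that can mimic a causal response if selection is not modeled.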


A Organization of the Appendix

Neural Information Processing Systems

We remind the reader of some standard facts about kernel ridge regression and the Gaussian/RBF kernel; see Shalev-Shwartz and Ben-David (2014) for a reference.
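The standard facts referred to can be summarized in a few lines: kernel ridge regression fits coefficients alpha = (K + lam * I)^{-1} y and predicts f(x) = sum_i alpha_i k(x_i, x), with the Gaussian/RBF kernel k(a, b) = exp(-gamma * ||a - b||^2). A minimal sketch (the target function, gamma, and lam are illustrative choices, not from the appendix):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Kernel ridge regression: alpha = (K + lam * I)^{-1} y,
# prediction f(x) = sum_i alpha_i * k(x_i, x).
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0])

lam = 1e-3
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

X_test = np.array([[0.5]])
pred = rbf_kernel(X_test, X) @ alpha
print(float(pred[0]))  # close to sin(0.5)
```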


Tractable Latent State Inference for Hidden Continuous-Time semi-Markov Chains Supplement

Neural Information Processing Systems

We will first replicate an equation similar to (20) for the backward case. The derivation is similar to that of the forward equation: it combines equations (16), (18), and (19) while leaving out the observation likelihood function. The combination is again carried out using the Laplace transform.
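The reason the Laplace transform is the natural tool for combining these equations is that it turns the convolution of dwell-time densities into a product of transforms. A minimal numeric check of that property (a hypothetical pair of exponential densities, not the supplement's equations (16)-(20)):

```python
import numpy as np

# For independent dwell times T1, T2 with exponential densities
# f_i(t) = l_i * exp(-l_i * t), the Laplace transform of the density
# of T1 + T2 (a convolution) equals the product of the individual
# transforms L[f_i](s) = l_i / (l_i + s).
l1, l2, s = 1.0, 2.0, 0.7
rng = np.random.default_rng(2)
t1 = rng.exponential(1.0 / l1, size=500_000)
t2 = rng.exponential(1.0 / l2, size=500_000)

# Monte Carlo estimate of the transform of the convolved density:
# L[f1 * f2](s) = E[exp(-s * (T1 + T2))].
lhs = np.exp(-s * (t1 + t2)).mean()
# Product of the individual transforms.
rhs = (l1 / (l1 + s)) * (l2 / (l2 + s))

print(abs(lhs - rhs) < 5e-3)  # True: convolution maps to product
```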